Dynamic operator management in meta-heuristics using reinforcement learning: an application to permutation flowshop scheduling problems

Mamaghan, Maryam Karimi, Mohammadi, Mehrdad, Dullaert, Wout, Vigo, Daniele, Pirayesh, Amir

arXiv.org Artificial Intelligence

Using a portfolio of multiple search operators with different characteristics has been shown to improve exploration and exploitation ability and, consequently, to enhance the overall performance of meta-heuristics in solving different combinatorial optimization problems (COPs) [1, 2, 3, 4, 5]. From a theoretical perspective, the search space of a COP represents a non-stationary environment, meaning that the performance of different search operators varies depending on the region of the search space being explored. An operator working well in one region might be less effective in another. Accordingly, incorporating a portfolio of diverse operators into a meta-heuristic is expected to enhance its overall performance [6]. For every COP, numerous search operators are available in the literature (either variations of the same operator with different configurations or entirely distinct operators), with the possibility of proposing new ones. Since the operators' performance is not pre-determined but rather depends on the algorithm's behavior on specific problems/instances, predicting the operators' performance proves challenging. Even if the most efficient operators could be determined, the order in which these operators should be invoked during the search process remains undetermined. Hence, optimizing the performance of a meta-heuristic with multiple operators for solving different problem instances is always challenging [6, 7, 8, 9]. We label this problem the operator management problem in meta-heuristics, wherein the user must address two questions: What operators should I include in the portfolio?, and How (in which order) should I involve the in-portfolio operators during the search process?
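The abstract does not specify the exact reinforcement learning scheme, but the general idea of learning which in-portfolio operator to apply next can be sketched with a simple epsilon-greedy bandit that credits operators by the improvement they produce. Everything below (class name, the `swap`/`insert` dummy operators, the credit-update rule) is a hypothetical illustration, not the paper's method:

```python
import random

class EpsilonGreedyOperatorSelector:
    """Hypothetical sketch of adaptive operator selection.

    With probability epsilon a random operator is tried (exploration);
    otherwise the operator with the highest learned credit is applied
    (exploitation). Credits are updated with an exponential moving
    average of the observed reward (e.g. objective improvement).
    """

    def __init__(self, operators, epsilon=0.1, alpha=0.2):
        self.operators = operators                      # list of callables
        self.epsilon = epsilon                          # exploration rate
        self.alpha = alpha                              # credit learning rate
        self.credit = {op.__name__: 0.0 for op in operators}

    def select(self):
        # Explore with probability epsilon, otherwise exploit best credit.
        if random.random() < self.epsilon:
            return random.choice(self.operators)
        return max(self.operators, key=lambda op: self.credit[op.__name__])

    def update(self, op, reward):
        # Moving-average credit update toward the observed reward.
        name = op.__name__
        self.credit[name] += self.alpha * (reward - self.credit[name])
```

In a meta-heuristic loop, `select()` would pick the next neighborhood move, the move would be applied to the incumbent solution, and `update()` would be called with the resulting improvement; richer RL formulations (e.g. Q-learning over search states) refine the same select/credit cycle.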


Improving k-Means Clustering Performance with Disentangled Internal Representations

Agarap, Abien Fred, Azcarraga, Arnulfo P.

arXiv.org Machine Learning

Deep clustering algorithms combine representation learning and clustering by jointly optimizing a clustering loss and a non-clustering loss. In such methods, a deep neural network is used for representation learning together with a clustering network. Instead of following this framework to improve clustering performance, we propose a simpler approach of optimizing the entanglement of the learned latent code representation of an autoencoder. We define entanglement as how close pairs of points from the same class or structure are, relative to pairs of points from different classes or structures. To measure the entanglement of data points, we use the soft nearest neighbor loss, and expand it by introducing an annealing temperature factor. Using our proposed approach, the test clustering accuracy was 96.2% on the MNIST dataset, 85.6% on the Fashion-MNIST dataset, and 79.2% on the EMNIST Balanced dataset, outperforming our baseline models.
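The entanglement measure described above can be made concrete with a NumPy sketch of the soft nearest neighbor loss: for each point, the loss compares the exponentiated negative squared distances to same-class points against those to all other points, scaled by a temperature. This is a generic formulation of the loss, not the paper's exact implementation, and the annealing factor the authors introduce is approximated here by simply passing a different `temperature` at each training step:

```python
import numpy as np

def soft_nearest_neighbor_loss(features, labels, temperature=1.0):
    """Soft nearest neighbor loss over a batch.

    Low values mean points sit close to same-class neighbors relative
    to other-class neighbors (low entanglement); high values mean the
    classes are entangled. `temperature` scales the distance kernel.
    """
    b = features.shape[0]
    # Pairwise squared Euclidean distances, shape (b, b).
    diff = features[:, None, :] - features[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)
    not_self = ~np.eye(b, dtype=bool)
    # Distance kernel, excluding each point's similarity to itself.
    sim = np.exp(-sq_dist / temperature) * not_self
    same_class = (labels[:, None] == labels[None, :]) & not_self
    eps = 1e-12  # numerical guard against empty neighborhoods
    num = np.sum(sim * same_class, axis=1)  # mass on same-class neighbors
    den = np.sum(sim, axis=1)               # mass on all neighbors
    return -np.mean(np.log((num + eps) / (den + eps)))
```

When same-class points are tightly clustered and far from other classes, the same-class mass dominates the total mass, the log ratio approaches zero, and the loss is small; minimizing it on an autoencoder's latent code therefore pushes toward the disentangled representation that the clustering results exploit.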